importance sampling
In statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. It is related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.
== Basic theory ==
Let X:\Omega\to \mathbb{R} be a random variable in some probability space (\Omega,\mathcal{F},P). We wish to estimate the expected value of ''X'' under ''P'', denoted \mathbf{E}[X;P]. If we have random samples x_1, \ldots, x_n, generated according to ''P'', then an empirical estimate of \mathbf{E}[X;P] is
:
\widehat{\mathbf{E}}_{n}[X;P] = \frac{1}{n} \sum_{i=1}^n x_i.
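As a concrete sketch of the empirical estimate above (the choice of distribution and the function name are illustrative, not from the article): with ''P'' taken to be Exponential(1), so that \mathbf{E}[X;P] = 1, the sample mean converges to that value.

```python
import random

# Minimal sketch: estimate E[X;P] by the empirical mean of n i.i.d.
# samples drawn from P. Here P = Exponential(1), so E[X;P] = 1.
def empirical_mean(n, seed=0):
    rng = random.Random(seed)
    samples = [rng.expovariate(1.0) for _ in range(n)]
    return sum(samples) / n

estimate = empirical_mean(100_000)  # close to 1 for large n
```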

The basic idea of importance sampling is to change the probability measure ''P'' so that the estimation of \mathbf{E}[X;P] is easier. Choose a random variable L\geq 0 such that \mathbf{E}[L;P]=1 and such that ''P''-almost everywhere L(\omega)\neq 0. With the variate ''L'' we define another probability measure P^{(L)}:=L\, P that satisfies
:
\mathbf{E}[X;P] = \mathbf{E}\left[\frac{X}{L};P^{(L)}\right].

The variable ''X/L'' will thus be sampled under P^{(L)} to estimate \mathbf{E}[X;P] as above. This procedure improves the estimation whenever \operatorname{Var}\left[\frac{X}{L};P^{(L)}\right] < \operatorname{Var}[X;P]. Another case of interest is when ''X/L'' is easier to sample under P^{(L)} than ''X'' under ''P''.
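A minimal sketch of this estimator, under assumptions not taken from the article: ''P'' is the standard normal N(0,1), ''X'' is the rare-event indicator 1{Z > 3}, and the tilted measure P^{(L)} is N(3,1), so that ''L'' is the density ratio of the two normals. Averaging X/L under P^{(L)} recovers \mathbf{E}[X;P] = P(Z > 3) with far lower variance than naive sampling under ''P''.

```python
import math
import random

def normal_pdf(z, mu):
    """Density of N(mu, 1) at z."""
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2 * math.pi)

def importance_estimate(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(3.0, 1.0)           # sample under P^(L) = N(3, 1)
        x = 1.0 if z > 3.0 else 0.0       # X evaluated at the sample
        L = normal_pdf(z, 3.0) / normal_pdf(z, 0.0)  # dP^(L)/dP at z
        total += x / L                    # the estimator X/L
    return total / n

est = importance_estimate(50_000)
# true value: P(Z > 3) = 1 - Phi(3), approximately 1.35e-3
```

With n = 50,000 samples under ''P'' directly, only about 70 would land in {Z > 3}; under P^{(L)} roughly half do, each downweighted by ''1/L''.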
When ''X'' is of constant sign over \Omega, the best variable ''L'' would clearly be L^*=\frac{X}{\mathbf{E}[X;P]}\geq 0, so that ''X/L^*'' is the searched constant \mathbf{E}[X;P] and a single sample under P^{(L^*)} suffices to give its value. Unfortunately we cannot take that choice, because \mathbf{E}[X;P] is precisely the value we are looking for! However, this theoretical best case L^* gives us an insight into what importance sampling does:
:
\begin{align}
\forall a\in\mathbb{R}, \; P^{(L^*)}(X\in[a;a+da]) &= \int_{\omega\in\{X\in[a;a+da]\}} \frac{X(\omega)}{\mathbf{E}[X;P]}\,dP(\omega) \\
&= \frac{1}{\mathbf{E}[X;P]}\; a\,P(X\in[a;a+da])
\end{align}
To the right, a\,P(X\in[a;a+da]) is one of the infinitesimal elements that sum up to \mathbf{E}[X;P]:
: \mathbf{E}[X;P] = \int_{-\infty}^{+\infty} a\,P(X\in[a;a+da])
Therefore, a good probability change P^{(L)} in importance sampling will redistribute the law of ''X'' so that its samples' frequencies are sorted directly according to their weights in \mathbf{E}[X;P]. Hence the name "importance sampling."
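The zero-variance optimum L^* can be seen exactly on a small discrete example (the values and probabilities below are illustrative assumptions): tilting ''P'' by L^*(x) = x/\mathbf{E}[X;P] makes every sample of X/L^* equal to \mathbf{E}[X;P], so a single draw suffices.

```python
import random

# Illustrative discrete X under P, small enough that P^(L*) is easy
# to sample from directly.
values = [1.0, 2.0, 5.0]
probs  = [0.5, 0.3, 0.2]                              # the measure P
mean = sum(v * p for v, p in zip(values, probs))      # E[X;P] = 2.1

# P^(L*) weights each outcome a by L*(a) = a / E[X;P].
tilted = [v * p / mean for v, p in zip(values, probs)]

rng = random.Random(0)
draws = rng.choices(values, weights=tilted, k=5)
# Each draw x contributes x / L*(x) = E[X;P] exactly: zero variance.
estimates = [x / (x / mean) for x in draws]
```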
Note that whenever ''P'' is the uniform distribution and \Omega = \mathbb{R}, we are just estimating the integral of the real function X:\mathbb{R}\to\mathbb{R}, so the method can also be used for estimating simple integrals.
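The closing remark can be sketched on a bounded domain (the integrand and the interval [0, 1] are illustrative assumptions): with ''P'' uniform on [0, 1] and X(\omega) = f(\omega), the sample mean of ''f'' at uniform draws estimates \int_0^1 f.

```python
import math
import random

def integrate_uniform(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1],
    i.e. E[f(U); U ~ Uniform(0, 1)]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

approx = integrate_uniform(math.exp, 200_000)
# converges to the exact integral e - 1 as n grows
```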


Source: excerpted from the free encyclopedia Wikipedia (ウィキペディア), article "importance sampling".


